97 research outputs found
SACR: Scheduling-Aware Cache Reconfiguration for Real-Time Embedded Systems
Dynamic reconfiguration techniques are widely used for efficient system optimization. Dynamic cache reconfiguration is a promising approach for reducing energy consumption as well as for improving overall system performance. Introducing cache reconfiguration into real-time embedded systems is a major challenge, since dynamic analysis may adversely affect tasks with real-time constraints. This paper presents a novel approach for implementing cache reconfiguration in soft real-time systems by efficiently leveraging static analysis during execution to both minimize energy and maximize performance. To the best of our knowledge, this is the first attempt to integrate dynamic cache reconfiguration into real-time scheduling techniques. Our experimental results using a wide variety of applications demonstrate that our approach can significantly (up to 74%) reduce the overall energy consumption of the cache hierarchy in soft real-time systems.
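The core idea of leveraging static analysis at run time can be sketched as a scheduler that, at each context switch, looks up a cache configuration profiled offline for the incoming task. This is a minimal illustrative sketch, not the paper's implementation; the task names, configuration tuples, and profile table are all hypothetical.

```python
# Hypothetical sketch of scheduling-aware cache reconfiguration: at each
# context switch the scheduler applies a cache configuration chosen offline
# by static profiling. All task names and numbers below are illustrative.

# Per-task profile: task -> (cache size in KB, associativity, line size in B),
# assumed to come from an offline static-analysis pass.
PROFILE = {
    "mpeg_decode": (4, 2, 32),   # small config is energy-optimal here
    "fft":         (8, 4, 64),   # larger config needed to meet the deadline
}

# Conservative fallback for tasks with no profiled configuration.
DEFAULT = (8, 4, 32)

def reconfigure_cache(task_name):
    """Return the cache configuration to apply when `task_name` is dispatched."""
    return PROFILE.get(task_name, DEFAULT)

# The scheduler would pass the returned tuple to the reconfigurable cache
# hardware before resuming the task.
print(reconfigure_cache("fft"))       # (8, 4, 64)
print(reconfigure_cache("unknown"))   # (8, 4, 32) -> falls back to DEFAULT
```

Because the configurations are fixed ahead of time by static analysis, the run-time cost is a table lookup, which is what keeps the mechanism safe for tasks with real-time constraints.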
Efficient Deep Reinforcement Learning via Adaptive Policy Transfer
Transfer Learning (TL) has shown great potential to accelerate Reinforcement
Learning (RL) by leveraging prior knowledge from past learned policies of
relevant tasks. Existing transfer approaches either explicitly compute the
similarity between tasks or select appropriate source policies to provide
guided exploration for the target task. However, approaches that directly
optimize the target policy by alternately exploiting knowledge from
appropriate source policies, without explicitly measuring similarity, are
currently missing. In
this paper, we propose a novel Policy Transfer Framework (PTF) to accelerate RL
by taking advantage of this idea. Our framework learns when and which source
policy is the best to reuse for the target policy and when to terminate it by
modeling multi-policy transfer as the option learning problem. PTF can be
easily combined with existing deep RL approaches. Experimental results show it
significantly accelerates the learning process and surpasses state-of-the-art
policy transfer methods in terms of learning efficiency and final performance
in both discrete and continuous action spaces.

Comment: Accepted by IJCAI'202
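The abstract's idea of "learning when and which source policy to reuse and when to terminate it" can be illustrated with an options-style wrapper: each source policy becomes an option carrying a learned reuse-value and a termination probability. This is a toy sketch in the spirit of PTF, not the authors' implementation; the class, field names, and values are assumptions.

```python
import numpy as np

# Toy option-style policy transfer: each source policy is wrapped as an
# option with a learned value estimate (how useful reusing it currently is)
# and a termination probability. The agent reuses the highest-value option
# and terminates it stochastically. All names/values are illustrative.

rng = np.random.default_rng(0)

class SourceOption:
    def __init__(self, policy, value=0.0, term_prob=0.1):
        self.policy = policy          # callable: state -> action
        self.value = value            # learned estimate of reuse value
        self.term_prob = term_prob    # learned termination probability

    def terminated(self):
        # Stochastic termination, as in the options framework.
        return rng.random() < self.term_prob

def select_option(options):
    """Pick the source policy currently estimated to be most useful."""
    return max(options, key=lambda o: o.value)

options = [
    SourceOption(lambda s: 0, value=0.2),
    SourceOption(lambda s: 1, value=0.9),  # best source for this target task
]

active = select_option(options)
print(active.policy(None))  # 1 -> action suggested by the reused source policy
```

In the actual framework the value and termination estimates would be learned jointly with the target policy; here they are fixed constants purely to show the selection-and-termination control flow.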
Multi-Agent Game Abstraction via Graph Attention Neural Network
In large-scale multi-agent systems, the large number of agents and complex
game relationship cause great difficulty for policy learning. Therefore,
simplifying the learning process is an important research issue. In many
multi-agent systems, the interactions between agents often happen locally,
which means that agents neither need to coordinate with all other agents nor
need to coordinate with others all the time. Traditional methods attempt to use
pre-defined rules to capture the interaction relationship between agents.
However, such methods cannot be directly used in a large-scale environment
because the complex interactions between agents are difficult to transform
into rules. In this paper, we model the relationships between agents as a complete
graph and propose a novel game abstraction mechanism based on two-stage
attention network (G2ANet), which can indicate whether there is an interaction
between two agents and the importance of the interaction. We integrate this
detection mechanism into graph neural network-based multi-agent reinforcement
learning for conducting game abstraction and propose two novel learning
algorithms GA-Comm and GA-AC. We conduct experiments in Traffic Junction and
Predator-Prey. The results indicate that the proposed methods can simplify the
learning process and meanwhile get better asymptotic performance compared with
state-of-the-art algorithms.

Comment: Accepted by AAAI202
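The two-stage attention idea, a hard stage deciding whether two agents interact at all and a soft stage weighing the surviving interactions, can be sketched as follows. This is an illustrative simplification in the spirit of G2ANet, not the paper's architecture; the threshold gate and the score vector are assumptions (the paper learns the hard gate rather than thresholding).

```python
import numpy as np

# Illustrative two-stage attention: stage 1 (hard) gates which agent pairs
# interact at all; stage 2 (soft) assigns importance weights over the
# surviving pairs. A simple threshold stands in for the learned hard gate.

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def two_stage_attention(scores, gate_threshold=0.0):
    """scores[j]: raw interaction score of one agent with each neighbour j."""
    hard = (scores > gate_threshold).astype(float)  # stage 1: interact or not
    masked = np.where(hard > 0, scores, -np.inf)    # remove gated-out pairs
    soft = softmax(masked)                          # stage 2: importance
    return hard, soft

scores = np.array([2.0, -1.0, 0.5])  # agent 0's scores w.r.t. agents 1..3
hard, soft = two_stage_attention(scores)
print(hard)  # [1. 0. 1.] -> the second neighbour is pruned from the graph
print(soft)  # importance weights over the remaining neighbours, summing to 1
```

Pruning before the softmax is what performs the game abstraction: agents that fail the hard gate contribute exactly zero weight, so the downstream graph neural network only aggregates over the simplified interaction graph.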
From Few to More: Large-scale Dynamic Multiagent Curriculum Learning
Many efforts have been devoted to investigating how agents can learn
effectively and achieve coordination in multiagent systems. However, it is
still challenging in large-scale multiagent settings due to the complex
dynamics between the environment and agents and the explosion of state-action
space. In this paper, we design a novel Dynamic Multiagent Curriculum Learning
(DyMA-CL) to solve large-scale problems by starting from learning on a
multiagent scenario with a small size and progressively increasing the number
of agents. We propose three transfer mechanisms across curricula to accelerate
the learning process. Moreover, because the state dimension varies across
curricula, existing network structures cannot be applied in such a transfer
setting, since their input sizes are fixed. Therefore, we design a novel
network structure called Dynamic Agent-number Network (DyAN) to
handle the dynamic size of the network input. Experimental results show that
DyMA-CL using DyAN greatly improves the performance of large-scale multiagent
learning compared with state-of-the-art deep reinforcement learning approaches.
We also investigate the influence of three transfer mechanisms across curricula
through extensive simulations.

Comment: Accepted by AAAI202
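The key requirement on DyAN, accepting inputs whose size changes as the curriculum adds agents, can be illustrated with shared per-agent embeddings followed by pooling, a standard way to make a network agent-count invariant. This is a minimal sketch of that idea, not the paper's exact network; the dimensions and weights are arbitrary.

```python
import numpy as np

# Minimal sketch of a network input that tolerates a varying number of
# agents: embed every agent's observation with the same shared weights,
# then mean-pool, so the aggregate feature has a fixed size no matter how
# many agents the current curriculum stage contains.

rng = np.random.default_rng(1)
W = rng.standard_normal((4, 8))  # shared embedding: obs_dim=4 -> hidden=8

def pooled_embedding(agent_obs):
    """agent_obs: (n_agents, 4) array; returns a fixed-size (8,) feature."""
    h = np.tanh(agent_obs @ W)   # embed each agent with the same weights
    return h.mean(axis=0)        # pooling removes the agent-count dimension

small = pooled_embedding(rng.standard_normal((3, 4)))   # 3-agent curriculum
large = pooled_embedding(rng.standard_normal((50, 4)))  # 50-agent curriculum
print(small.shape, large.shape)  # (8,) (8,) -> same downstream layers apply
```

Because the pooled feature has the same shape at every curriculum stage, the downstream policy layers, and therefore the knowledge learned on the small scenario, can be carried over directly as the number of agents grows.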
- …